Sequential decoding

Sequential decoding is a limited-memory technique for decoding tree codes. It is mainly used as an approximate decoding algorithm for convolutional codes with long constraint lengths. This approach may not be as accurate as the Viterbi algorithm, but it can save a substantial amount of computer memory.
Sequential decoding explores the tree code so as to minimise both the computational cost and the memory required to store the tree.
There is a range of sequential decoding approaches based on the choice of metric and algorithm. Metrics include:
*Fano metric
*Zigangirov metric
*Gallager metric
Algorithms include:
*Stack algorithm
*Fano algorithm
*Creeper algorithm
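As an illustration of the first of these, the stack algorithm can be sketched in a few lines. This is a minimal sketch, not a production decoder: it assumes a toy rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 in octal), a binary symmetric channel with an illustrative error probability p = 0.05, and the per-bit Fano metric derived in the next section. All names and parameters are illustrative.

```python
import heapq
import math

# Toy rate-1/2, constraint-length-3 convolutional code; illustrative choices.
K, GENS, RATE = 3, (0b111, 0b101), 0.5

def branch_output(state, bit):
    """Encode one input bit from a 2-bit state; return (output bits, next state)."""
    reg = (bit << (K - 1)) | state            # newest bit in the high position
    out = tuple(bin(reg & g).count("1") % 2 for g in GENS)
    return out, reg >> 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        o, state = branch_output(state, b)
        out.extend(o)
    return out

def stack_decode(received, n_branches, p=0.05):
    """Stack algorithm using the per-bit Fano metric for a BSC(p)."""
    match = math.log2(1 - p) + 1 - RATE       # gain for each matching bit
    mismatch = math.log2(p) + 1 - RATE        # penalty for each mismatch
    stack = [(0.0, 0, ())]                    # (-metric, encoder state, decoded bits)
    while stack:
        neg_m, state, path = heapq.heappop(stack)   # best node explored so far
        if len(path) == n_branches:
            return list(path)                 # first full-length path popped
        d = len(path)
        for bit in (0, 1):                    # extend the best node both ways
            out, ns = branch_output(state, bit)
            rx = received[2 * d : 2 * d + 2]
            m = -neg_m + sum(match if o == r else mismatch
                             for o, r in zip(out, rx))
            heapq.heappush(stack, (-m, ns, path + (bit,)))

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(msg)
rx[3] ^= 1                                    # flip one channel bit
decoded = stack_decode(rx, len(msg))          # recovers msg despite the error
```

Note that only the single best node is extended per iteration, so memory grows with the number of explored nodes rather than with the full tree; this is the limited-memory trade-off mentioned above.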
==Fano metric==

Given a partially explored tree (represented by the set of nodes at the limit of exploration), we would like to know the best node from which to explore further. The Fano metric (named after Robert Fano) identifies that node. This metric is optimal given no other constraints (e.g. memory).
For a binary symmetric channel (with error probability p) the Fano metric can be derived via Bayes' theorem. We are interested in following the most likely path P_i given an explored state of the tree X and a received sequence \mathbf{r}. Using the language of probability and Bayes' theorem we want to choose the maximum over i of:
:\Pr(P_i \mid X, \mathbf{r}) \propto \Pr(\mathbf{r} \mid P_i, X)\,\Pr(P_i \mid X)
We now introduce the following notation:
*N to represent the maximum length of transmission in branches
*b to represent the number of bits on a branch of the code (the denominator of the code rate, R).
*d_i to represent the number of bit errors on path P_i (the Hamming distance between the branch labels and the received sequence)
*n_i to be the length of P_i in branches.
We express the likelihood \Pr(\mathbf{r} \mid P_i, X) as p^{d_i} (1-p)^{n_i b - d_i}\, 2^{-(N - n_i)b} (by using the binary symmetric channel likelihood for the first n_i b bits followed by a uniform prior over the remaining bits).
We express the prior \Pr(P_i \mid X) in terms of the number of branch choices one has made, n_i, and the number of branches from each node, 2^{Rb}.
Therefore:
:\begin{align}
\Pr(P_i \mid X, \mathbf{r}) &\propto p^{d_i} (1-p)^{n_i b - d_i}\, 2^{-(N - n_i)b}\, 2^{-n_i R b} \\
&\propto p^{d_i} (1-p)^{n_i b - d_i}\, 2^{n_i b}\, 2^{-n_i R b}
\end{align}

We can equivalently maximise the log of this probability, i.e.
:\begin{align}
&d_i \log_2 p + (n_i b - d_i) \log_2 (1-p) + n_i b - n_i R b \\
={} &d_i(\log_2 p + 1 - R) + (n_i b - d_i)(\log_2 (1-p) + 1 - R)
\end{align}
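The algebraic step above can be sanity-checked numerically. The values of p, R, b, n_i and d_i below are arbitrary illustrative choices, not values from the text:

```python
import math

p, R, b = 0.05, 0.5, 2          # illustrative channel and code parameters
n_i, d_i = 8, 1                 # path length in branches, bit errors on the path
nb = n_i * b                    # total bits on the path

# Form before the rearrangement (log of the posterior, constants dropped)
lhs = d_i * math.log2(p) + (nb - d_i) * math.log2(1 - p) + nb - n_i * R * b
# Rearranged form: one term per wrong bit, one term per right bit
rhs = d_i * (math.log2(p) + 1 - R) + (nb - d_i) * (math.log2(1 - p) + 1 - R)
print(math.isclose(lhs, rhs))   # prints True
```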

This last expression is the Fano metric. The key observation is that it consists of two terms: one based on the number of wrong bits and one based on the number of right bits. We can therefore update the Fano metric incrementally, adding \log_2 p + 1 - R for each non-matching bit and \log_2 (1-p) + 1 - R for each matching bit.
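This incremental update can be written down directly. A minimal sketch, assuming a binary symmetric channel with an illustrative crossover probability p = 0.05 and rate R = 1/2; the function names are illustrative:

```python
import math

def fano_increments(p, R):
    """Per-bit Fano metric increments for a BSC with crossover probability p."""
    match = math.log2(1 - p) + 1 - R        # added for each matching bit
    mismatch = math.log2(p) + 1 - R         # added for each non-matching bit
    return match, mismatch

def update_metric(metric, sent_bits, received_bits, p=0.05, R=0.5):
    """Extend a running path metric by one branch of the tree."""
    match, mismatch = fano_increments(p, R)
    for s, r in zip(sent_bits, received_bits):
        metric += match if s == r else mismatch
    return metric

m = update_metric(0.0, [1, 1], [1, 0])      # one matching bit, one mismatch
```

For typical p < 1/2 and R < 1 the matching increment is positive and the mismatching increment is negative, so the metric rewards paths that agree with the received sequence and penalises those that diverge from it.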

Source: Wikipedia, the free encyclopedia, article "Sequential decoding".